From: route@monster.com

Sent: Tuesday, June 04, 2013 3:54 PM

To: hg@apeironinc.com

Subject: Please review this candidate for: Big Data

 

This resume has been forwarded to you at the request of Monster User xapeix01

Pradeep D 

Last updated:  05/06/13

Job Title:  not specified

Company:  not specified

Rating:  Not Rated

Screening score:  not specified

Status:  Resume Received


New York  10002
US

dpradeep702@gmail.com
Contact Preference:  Telephone


 

 

RESUME

  

Resume Headline: Pradeep D - Hadoop Admin / Hadoop Engineer / Big Data Consultant

Resume Value: jcixh787a7yv9he9   

  

 

 

Pradeep Dasari

Sr. Hadoop Administrator / Sr. Hadoop Engineer / Big Data Consultant

Summary:

· 7+ years of total work experience in the IT industry, including around 2 years of experience with Hadoop, HDFS, MapReduce and the Hadoop ecosystem (Pig, Hive, HBase).

· 3 years of experience in system administration, managing Windows systems and providing network support in a 24x7 environment.

· Good knowledge of and experience in Hadoop administration.

· Installation and configuration of Hadoop and associated applications such as HBase.

· Hadoop performance tuning.

· Over 2 years of experience developing Big Data projects using Hadoop, the Hadoop ecosystem and NoSQL data stores.

· Design of Big Data solutions for traditional enterprise businesses.

· Setting up Hadoop in various modes and integrating Hadoop ecosystem components with Hadoop.

· Writing MapReduce jobs in native Java code, Pig and Hive for various business use cases (a minimal sketch follows this list).

· Working with both SQL and NoSQL databases.

· Experience in object-oriented languages such as Java (Core Java).

· Experience in ETL and data warehousing.

· Good communication and interpersonal skills.

· Intelligent and flexible in interpersonal relations.
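
As a minimal illustration of the MapReduce work mentioned above, the sketch below is a word-count-style job written against the Hadoop 0.20/1.x Java API; the class names, input/output paths and tokenizing logic are illustrative assumptions, not code from any specific project listed in this resume.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        // Mapper: emits (word, 1) for every whitespace-separated token in a line
        public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                for (String token : value.toString().split("\\s+")) {
                    if (!token.isEmpty()) {
                        word.set(token);
                        context.write(word, ONE);
                    }
                }
            }
        }

        // Reducer: sums the counts emitted for each word
        public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) {
                    sum += v.get();
                }
                context.write(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = new Job(conf, "word count");   // Job.getInstance(conf) in newer Hadoop versions
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenMapper.class);
            job.setCombinerClass(SumReducer.class);
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. an HDFS input directory
            FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory must not already exist
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

A job like this would typically be packaged into a jar and submitted to the cluster with the hadoop jar command.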

 

Education:

B.Tech from Jawaharlal Nehru Technological Institution, India

 

Technical Skills:

Hadoop: HDFS, Hive, Pig, Sqoop, Flume, Zookeeper and HBase

Languages: Core Java, JSP, Servlets, JDBC, JMS, SQL, PL/SQL, HQL, Pig Latin and Python

Web Technologies: HTML, Ajax and JavaScript

XML Technologies: XML and XSL

Server-Side Scripting: UNIX Shell Scripting

Unit Testing: JUnit

ETL Tools: SQL*Loader, Informatica

Application Servers: Tomcat, WebSphere and WebLogic

IDE: Eclipse and Oracle JDeveloper 10g

Databases: Oracle 9i/10g and HBase

Operating Systems: Windows NT/2000/XP, UNIX, Solaris and Linux

Version Control: PVCS and WinCVS

Design Tools: MS Visio and MagicDraw

 

 

Experience:

Bank of America, Charlotte, NC                                                                                                                      Nov 2010 – Present

Hadoop/Big Data Engineer

Retail data sent by different LOBs (consumer data, mortgage data) is sourced into HDFS. Data cleansing and business transformations are implemented in the Hadoop ecosystem. The data is provisioned to downstream systems for reporting and dashboarding purposes.

Responsibilities:

· Installed and configured a multi-node, fully distributed Hadoop cluster.

· Involved in installing Hadoop ecosystem components.

· Responsible for managing data coming from different sources.

· Involved in Hadoop cluster administration, including adding and removing cluster nodes, capacity planning, performance tuning, monitoring and troubleshooting.

· Supported MapReduce programs running on the cluster.

· Involved in HDFS maintenance and administration through the Hadoop Java API.

· Configured the Fair Scheduler to provide service-level agreements for multiple users of the cluster.

· Maintained and monitored clusters; loaded data into the cluster from dynamically generated files using Flume and from relational database management systems using Sqoop.

· Managed Hadoop cluster node connectivity and security.

· Resolved configuration issues with Apache add-on tools.

· Involved in writing Java API code for interacting with HBase (a minimal sketch follows this list).

· Involved in writing Flume and Hive scripts to extract, transform and load data into the database.
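
As a minimal illustration of the HBase Java API work referenced in the list above, the sketch below writes and reads back a single cell using the classic HTable client (HBase 0.90-era API). The table name, column family, row key and values are hypothetical placeholders, not actual project resources.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseClientSketch {
        public static void main(String[] args) throws Exception {
            // Picks up hbase-site.xml (ZooKeeper quorum, etc.) from the classpath
            Configuration conf = HBaseConfiguration.create();

            // "customer_profile" and the "cf" column family are illustrative names only
            HTable table = new HTable(conf, "customer_profile");
            try {
                // Write one cell
                Put put = new Put(Bytes.toBytes("row-001"));
                put.add(Bytes.toBytes("cf"), Bytes.toBytes("segment"), Bytes.toBytes("retail"));
                table.put(put);

                // Read it back
                Get get = new Get(Bytes.toBytes("row-001"));
                Result result = table.get(get);
                byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("segment"));
                System.out.println("segment = " + Bytes.toString(value));
            } finally {
                table.close();
            }
        }
    }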

Environment:

Hadoop, Linux, HDFS, Hive, Sqoop, Flume, Zookeeper and HBase

 

Disney, Seattle, WA                                                                                                                                                  

Hadoop Administrator/Hadoop Developer                                                                                                       Mar 2010 – Oct 2010

The project was to move Disney products data from staging databases to HDFS. Legacy data stored on the mainframe was also moved to HDFS. The data in HDFS was then pushed to the data warehouse and on to downstream systems.

 

Responsibilities:

· Translated functional and technical requirements into detailed architecture and design.

· Involved in configuring a multi-node, fully distributed Hadoop cluster.

· Supported MapReduce programs running on the cluster.

· Responsible for managing data coming from different sources.

· Imported and exported data into HDFS and Hive using Sqoop.

· Experienced in analyzing data with Hive and Pig.

· Experienced in defining job flows.

· Responsible for operational support of the production system.

· Loaded log data directly into HDFS using Flume.

· Experienced in managing and reviewing Hadoop log files.

· Knowledge of machine learning tools.

· Analyzed data with Hive, Pig and Hadoop Streaming.

Environment:

Hadoop, Linux, HDFS, Hive, Sqoop, Flume, Zookeeper and HBase

 

Sr. ETL Developer                                                                                                                                                  Jul 2009 – Feb 2010

The project was to create a more efficient data warehouse in Netezza than the one previously built on Teradata. This warehouse reports on historical data stored in various databases and flat files. Data from different sources was brought into Netezza using Informatica ETL.

Responsibilities: 

· Analyzed the source data coming from Oracle and flat files.

· Used Informatica Designer to create mappings with different transformations to move data to the data warehouse; developed complex mappings in Informatica to load data from various sources using transformations such as Source Qualifier, Expression, Lookup, Aggregator, Update Strategy, Joiner and Rank.

· Responsible for creating sessions and workflows to load the data into the data warehouse using Informatica Workflow Manager.

· Identified bottlenecks in sources, targets and mappings and optimized them accordingly.

· Worked with NZLoad to load flat-file data into Netezza tables.

· Good understanding of Netezza architecture.

· Assisted the DBA and architect in identifying proper distribution keys for Netezza tables.

· Created mappings using pushdown optimization to achieve good performance when loading data into Netezza.

· Created and configured workflows, worklets and sessions to transport the data to the target Netezza warehouse tables using Informatica Workflow Manager.

Environment: Informatica PowerCenter 8.x, Flat files, Netezza 4.x, Teradata, UNIX, WinSQL & Shell Scripting

 

Amgen, Los Angeles, CA                                                                                                                 Nov 2007 – Jun 2009

ETL Developer

Amgen is an international biotechnology company headquartered in the Newbury Park section of Thousand Oaks, California. The sources of data consist of feeds from applications residing on the mainframe and applications residing on Netezza and Oracle. The data is extracted from the various sources and loaded into staging tables, where it is consolidated; a number of measures are calculated, and the result is finally loaded into the data mart.

Responsibilities:

· Worked with business analysts to identify appropriate sources for the data warehouse and to document business needs for decision-support data.

· Designed the ETL processes using Informatica to load data from flat files into the target Oracle data warehouse database.

· Performed data manipulations using various Informatica transformations such as Joiner, Expression, Lookup, Aggregator, Filter, Update Strategy and Sequence Generator.

· Involved in logical and physical data modeling with star and snowflake schema techniques using Erwin, in the data warehouse as well as the data mart.

· Wrote SQL overrides in the Source Qualifier according to business requirements.

· Wrote pre-session and post-session scripts in mappings.

· Created sessions and workflows for the designed mappings.

· Redesigned some of the existing mappings in the system to meet new functionality.

· Used Workflow Manager for creating, validating, testing and running sequential and concurrent batches and sessions, and for scheduling them to run at specified times with the required frequency.

· Extensively worked on performance tuning of programs, ETL procedures and processes.

· Developed PL/SQL procedures for processing business logic in the database.

· Migrated the mappings to the testing and production environments and introduced Informatica concepts to the testing team.

· Produced documentation as per company standards and the SDLC.

Environment: Informatica 7.x, Teradata, Oracle 9i, UNIX, Windows NT, UNIX Shell Programming.

 

TCS, Hyderabad, India – CISS                                                                                              Feb 2007 – Oct 2007

Module Developer, UI Design, Services Development

Inventory Pro is a standard inventory management application. The application is designed to handle every aspect of inventory management, providing the ability to track every step in the inventory life cycle, from creating a purchase order for the supplier to shipping the product to the customer. Inventory Pro allows adjusting and tracking of all stored items, which can be inventory items or assets. The inventory system allows users to perform tasks online that were traditionally done on paper. Some of these tasks include: inventory adjustments, location information, inventory issue, aging, FIFO/LIFO, predetermined locations, serial-number tracking, and multiple virtual warehouses in one physical warehouse. Additionally, the system allows measuring inventory usage through inventory adjustments, items issued and work orders.

Responsibilities:

· Involved in coding JSPs and Servlets.

· Wrote Servlets to fetch and manipulate data from the database (a minimal sketch follows this list).

· Design and development of the application user interface using Core Java, Servlets and JSP.

· Extensively involved in the development of the Shipping module, which used ArrayList collections and was developed with Servlets.

· Extensively used the Java Collections Framework.

· Utilized Servlets to handle requests from the client browser and send responses.

· Based on these definitions, the tool generates the XML file, which in turn is used to create Java code and HTML templates.

· Development of services using Java and Oracle.

· Developed Java applications and deployed them on the Tomcat 5.0 web server.

· Worked well with team members and communicated effectively.

· Involved in the development of user interfaces.
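
As a minimal illustration of the Servlet-plus-JDBC pattern referenced in the list above, the sketch below renders a simple inventory listing; the servlet name, JDBC URL, credentials and the inventory_items table are hypothetical placeholders rather than details of the actual Inventory Pro application.

    import java.io.IOException;
    import java.io.PrintWriter;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class InventoryListServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            response.setContentType("text/html");
            PrintWriter out = response.getWriter();
            out.println("<html><body><ul>");
            try {
                // Older JDBC drivers must be registered explicitly
                Class.forName("oracle.jdbc.driver.OracleDriver");
                Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@dbhost:1521:orcl", "app_user", "app_password");
                try {
                    Statement stmt = conn.createStatement();
                    ResultSet rs = stmt.executeQuery(
                            "SELECT item_name, quantity FROM inventory_items");
                    while (rs.next()) {
                        out.println("<li>" + rs.getString("item_name")
                                + ": " + rs.getInt("quantity") + "</li>");
                    }
                } finally {
                    conn.close();
                }
            } catch (Exception e) {
                throw new ServletException("Failed to load inventory items", e);
            }
            out.println("</ul></body></html>");
        }
    }

A servlet like this would be mapped to a URL in web.xml and deployed as part of the web application's WAR on Tomcat.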

Environment: Core Java, Servlets, JSP, HTML, Ajax, SQL

 

TCS, Hyderabad, India – Kroger                                                                                          July 2006 – Jan 2007

Module Developer, UI Design, Services Development

The Enterprise Journal (EJ) application is the Kroger standard interface for entering accounting records into the Kroger Accounting System (KAS). Journal entries are regularly processed by KAS, which will accept or reject the entries based on Kroger's business rules. The application includes the code required to accomplish all of the functions necessary for ensuring that valid accounting data is captured, processed and maintained.

Responsibilities:

· Developed the web GUI using tools like Dreamweaver.

· Implemented JSP pages using Eclipse.

· Developed new functionality, including both back-end and front-end (JSP) parts.

· Design and development of the application user interface using Core Java, Servlets and JSP.

· Development of services using Java and Oracle.

· Involved in the development of user interfaces.

· Developed Java applications and deployed them on the Tomcat 5.0 web server.

Environment: Core Java, Servlets, JSP, HTML



Experience


 

Job Title: Hadoop/Big Data Engineer

Company: Bank of America

Experience: - Present

 

Additional Info


 

Current Career Level: Experienced (Non-Manager)

Date of Availability: Immediately

Work Status: US - I am authorized to work in this country for my present employer only.

Active Security Clearance: None

US Military Service:

Citizenship: None

 

 

Target Job:

Target Job Title: Hadoop/Big Data Engineer

Desired Job Type: Intern

Target Company:

Company Size:

Target Locations:

Selected Locations: US-NY-New York City

Relocate: Yes

Willingness to travel: Up to 100%